
    Smaller = denser, and the brain knows it: natural statistics of object density shape weight expectations.

    If one nondescript object's volume is twice that of another, is it necessarily twice as heavy? As larger objects are typically heavier than smaller ones, one might assume humans use such heuristics in preparing to lift novel objects if other informative cues (e.g., material, previous lifts) are unavailable. However, it is also known that humans are sensitive to statistical properties of their environments, and that such sensitivity can bias perception. Here we asked whether statistical regularities in the properties of liftable, everyday objects would bias human observers' predictions about objects' weight relationships. We developed state-of-the-art computer vision techniques to precisely measure the volume of everyday objects, and also measured their weight. We discovered that for liftable man-made objects, "twice as large" doesn't mean "twice as heavy": Smaller objects are typically denser, following a power function of volume. Interestingly, this "smaller is denser" relationship does not hold for natural or unliftable objects, suggesting an ideal density range for objects designed to be lifted. We then asked human observers to predict weight relationships between novel objects without lifting them; crucially, these weight predictions quantitatively match the typical weight relationships shown by similarly sized objects in everyday environments. These results indicate that the human brain represents the statistics of everyday objects and that this representation can be quantitatively abstracted and applied to novel objects. Finally, that the brain possesses and can use precise knowledge of the nonlinear association between size and weight carries important implications for the implementation of forward models of motor control in artificial systems.
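
    The "smaller is denser" regularity described above amounts to a power-law relationship between density and volume, which can be characterized by a linear fit in log-log space. The sketch below illustrates that idea; the volumes, weights, and resulting exponent are placeholder values, not the paper's measurements or analysis code.

    ```python
    # Illustrative sketch (not the paper's code): fitting a power law
    # density = a * volume**b by linear regression in log-log space.
    # The volumes and weights below are made-up placeholder values.
    import numpy as np

    volumes = np.array([0.05, 0.2, 0.8, 1.5, 4.0])       # litres (hypothetical)
    weights = np.array([0.09, 0.25, 0.70, 1.10, 2.40])    # kg (hypothetical)
    densities = weights / volumes

    # log(density) = log(a) + b * log(volume)
    b, log_a = np.polyfit(np.log(volumes), np.log(densities), 1)
    print(f"fitted exponent b = {b:.2f}")  # b < 0 would mean smaller objects are denser
    ```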

    Visual rhythm perception improves through auditory but not visual training

    Memory research has shown that test performance is optimal when testing and practice occur in identical contexts [1]. However, recent research in object recognition and perceptual learning has shown that multisensory practice leads to improved test performance, even when the test is unisensory [2,3]. It is also known that different sensory modalities can have differing proficiencies in a given domain. For instance, research shows that, compared to the auditory modality, the visual modality is significantly less proficient at discriminating the rhythms of temporal sequences [4,5]. Although rhythm perception is typically thought of as residing in the auditory domain, instances of visual rhythm perception abound in daily life, for example, when one watches a dancer or a drummer, or when a doctor examines a patient’s breathing or heart rate on a monitor (such as when diagnosing arrhythmia). However, no previous study has examined whether visual rhythm discrimination is a trainable perceptual skill. In light of this, we examined the extent to which visual rhythm perception can be improved through two sessions of visual, auditory, or audiovisual training. We found that visual rhythm discrimination was significantly improved in the auditory and audiovisual training groups, but not in the visual training group. Our results show that, in certain tasks, within-modality training may not be the best approach and that training in a different sensory modality may instead be necessary to achieve learning.

    Bayesian priors are encoded independently from likelihoods in human multisensory perception

    It has been shown that human combination of crossmodal information is highly consistent with an optimal Bayesian model performing causal inference. These findings have shed light on the computational principles governing crossmodal integration/segregation. Intuitively, in a Bayesian framework priors represent a priori information about the environment, i.e., information available prior to encountering the given stimuli, and are thus not dependent on the current stimuli. While this interpretation is considered a defining characteristic of Bayesian computation by many, Bayes' rule per se does not require that priors remain constant despite significant changes in the stimulus; therefore, demonstrating that a task is performed Bayes-optimally does not imply that the priors are invariant to varying likelihoods. This issue has not been addressed before, so here we empirically investigated the independence of the priors from the likelihoods by strongly manipulating the presumed likelihoods (using two drastically different sets of stimuli) and examining whether the estimated priors change or remain the same. The results suggest that the estimated prior probabilities are indeed independent of the immediate input and, hence, of the likelihood.
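
    As a minimal illustration of the question being tested, the sketch below applies Bayes' rule with one fixed prior and two very different likelihood widths, standing in for the two drastically different stimulus sets; if priors are encoded independently of likelihoods, the same prior term enters both computations unchanged. All distributions and parameter values are hypothetical.

    ```python
    # Minimal sketch of the idea being tested: one fixed prior combined with two
    # very different likelihoods (narrow vs. broad sensory noise). All numbers
    # are illustrative, not estimates from the study.
    import numpy as np

    s = np.linspace(-30, 30, 601)                  # hypothesized stimulus values (deg)
    prior = np.exp(-s**2 / (2 * 12.0**2))          # one fixed prior over the variable
    prior /= prior.sum()

    def posterior(x_obs, sigma_like):
        like = np.exp(-(x_obs - s)**2 / (2 * sigma_like**2))
        post = like * prior                        # Bayes' rule: prior term unchanged
        return post / post.sum()

    # Two "drastically different" stimulus regimes -> different likelihood widths,
    # but the prior entering the computation is the same object in both calls.
    post_narrow = posterior(x_obs=5.0, sigma_like=2.0)
    post_broad = posterior(x_obs=5.0, sigma_like=10.0)
    ```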

    Sound-Induced Flash Illusion is Resistant to Feedback Training

    A single flash accompanied by two auditory beeps tends to be perceived as two flashes (Shams et al. Nature 408:788, 2000; Cogn Brain Res 14:147–152, 2002). This phenomenon is known as the ‘sound-induced flash illusion.’ Previous neuroimaging studies have shown that this illusion is correlated with modulation of activity in early visual cortical areas (Arden et al. Vision Res 43(23):2469–2478, 2003; Bhattacharya et al. NeuroReport 13:1727–1730, 2002; Shams et al. NeuroReport 12(17):3849–3852, 2001; Neurosci Lett 378(2):76–81, 2005; Watkins et al. Neuroimage 31:1247–1256, 2006; Neuroimage 37:572–578, 2007; Mishra et al. J Neurosci 27(15):4120–4131, 2007). We examined how robust the illusion is by testing whether its frequency can be reduced by providing feedback. We found that the sound-induced flash illusion was resistant to feedback training, except when the amount of monetary reward was made dependent on accuracy of performance. However, even in the latter case the participants reported that they still perceived two illusory flashes even though they correctly reported a single flash. Moreover, the feedback-training effect seemed to disappear once the participants were no longer provided with feedback, suggesting a short-lived refinement of the discrimination between illusory and physical double flashes rather than a vanishing of the illusory percept. These findings indicate that the effect of sound on the perceptual representation of visual stimuli is strong and robust to feedback training, and they provide further evidence against decision factors accounting for the sound-induced flash illusion.

    Comparing Bayesian models for multisensory cue combination without mandatory integration

    Bayesian models of multisensory perception traditionally address the problem of estimating an underlying variable that is assumed to be the cause of the two sensory signals. The brain, however, has to solve a more general problem: it also has to establish which signals come from the same source and should be integrated, and which do not and should be segregated. In the last couple of years, a few models have been proposed to solve this problem in a Bayesian fashion. One of these has the strength that it formalizes the causal structure of sensory signals. We first compare these models on a formal level. Furthermore, we conduct a psychophysics experiment to test human performance in an auditory-visual spatial localization task in which integration is not mandatory. We find that the causal Bayesian inference model accounts for the data better than the other models.
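
    For concreteness, the sketch below implements the kind of causal Bayesian inference computation referred to above, in the spirit of Kording et al. (2007): Gaussian likelihoods, a Gaussian spatial prior, and model averaging over the two causal structures. The noise parameters and the prior probability of a common cause are illustrative placeholders, not values fitted in this study.

    ```python
    # Sketch of Bayesian causal inference for audiovisual localization.
    # Parameter values are illustrative placeholders, not fits to the data.
    import numpy as np

    def causal_inference(x_v, x_a, sigma_v=2.0, sigma_a=8.0, sigma_p=15.0, p_common=0.5):
        """Return P(common cause | x_v, x_a) and the model-averaged visual estimate."""
        var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2

        # Likelihood of the two measurements under one common source (location integrated out).
        var1 = var_v * var_a + var_v * var_p + var_a * var_p
        like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * var_p + x_v**2 * var_a + x_a**2 * var_v)
                         / var1) / (2 * np.pi * np.sqrt(var1))

        # Likelihood under two independent sources.
        like_c2 = (np.exp(-0.5 * (x_v**2 / (var_v + var_p) + x_a**2 / (var_a + var_p)))
                   / (2 * np.pi * np.sqrt((var_v + var_p) * (var_a + var_p))))

        # Posterior probability of a common cause.
        p_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

        # Optimal location estimates under each causal structure (prior mean at 0).
        s_hat_c1 = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
        s_hat_c2 = (x_v / var_v) / (1 / var_v + 1 / var_p)

        # Model averaging: weight the two estimates by the posterior over causal structures.
        return p_c1, p_c1 * s_hat_c1 + (1 - p_c1) * s_hat_c2

    print(causal_inference(x_v=5.0, x_a=-5.0))
    ```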

    Characterizing response behavior in multisensory perception with conflicting cues

    We explore a recently proposed mixture-model approach to understanding interactions between conflicting sensory cues. Alternative model formulations, differing in their sensory noise models and inference methods, are compared based on their fit to experimental data. Heavy-tailed sensory likelihoods yield a better description of the subjects' response behavior than standard Gaussian noise models. We study the underlying cause of this result and then present several testable predictions of these models.
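
    To make the contrast concrete, the short sketch below compares a Gaussian sensory likelihood with a heavy-tailed alternative (a Student-t of matched scale). It only illustrates why heavy tails change how large cue conflicts are weighted; it does not reproduce the specific mixture-model formulations evaluated in the paper, and the parameters are arbitrary.

    ```python
    # Illustrative contrast between a Gaussian sensory likelihood and a heavy-tailed
    # alternative (Student-t). Parameter values are arbitrary placeholders.
    import numpy as np
    from scipy import stats

    discrepancy = np.linspace(-40, 40, 401)                 # cue conflict (e.g., degrees)
    gauss = stats.norm.pdf(discrepancy, scale=5.0)          # standard Gaussian noise model
    heavy = stats.t.pdf(discrepancy, df=3, scale=5.0)       # same scale, fatter tails

    # Heavy tails assign far more probability to large conflicts, so occasional big
    # discrepancies do not force the model to down-weight one cue as drastically.
    print(heavy[-1] / gauss[-1])  # ratio of tail densities at a 40-unit conflict
    ```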

    Multisensory causal inference in the brain

    At any given moment, our brain processes multiple inputs from its different sensory modalities (vision, hearing, touch, etc.). In deciphering this array of sensory information, the brain has to solve two problems: (1) which of the inputs originate from the same object and should be integrated, and (2) for the sensations originating from the same object, how best to integrate them. Recent behavioural studies suggest that the human brain solves these problems using optimal probabilistic inference, known as Bayesian causal inference. However, how and where the underlying computations are carried out in the brain has remained unknown. By combining neuroimaging-based decoding techniques and computational modelling of behavioural data, a new study now sheds light on how multisensory causal inference maps onto specific brain areas. The results suggest that the complexity of neural computations increases along the visual hierarchy, and they link specific components of the causal inference process with specific visual and parietal regions.